Independent Component Analysis


Reservoir Subspace Injection for Online ICA under Top-n Whitening

Xiao, Wenjun, Bi, Yuda, Calhoun, Vince D

arXiv.org Machine Learning

Reservoir expansion can improve online independent component analysis (ICA) under nonlinear mixing, yet top-$n$ whitening may discard injected features. We formalize this bottleneck as \emph{reservoir subspace injection} (RSI): injected features help only if they enter the retained eigenspace without displacing passthrough directions. RSI diagnostics (IER, SSO, $\rho_x$) identify a failure mode in our top-$n$ setting: stronger injection increases IER but crowds out passthrough energy ($\rho_x: 1.00\!\rightarrow\!0.77$), degrading SI-SDR by up to $2.2$\,dB. A guarded RSI controller preserves passthrough retention and recovers mean performance to within $0.1$\,dB of baseline $1/N$ scaling. With passthrough preserved, RE-OICA improves over vanilla online ICA by $+1.7$\,dB under nonlinear mixing and achieves positive SI-SDR$_{\mathrm{sc}}$ on the tested super-Gaussian benchmark ($+0.6$\,dB).
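The top-$n$ whitening step this abstract refers to can be sketched as ordinary PCA whitening that retains only the $n$ leading eigendirections of the sample covariance; the function name and interface below are illustrative, not taken from the paper:

```python
import numpy as np

def top_n_whiten(X, n):
    """Whiten data X (features x samples), keeping only the top-n
    eigendirections of the sample covariance. This is the retention
    step whose interaction with injected features the RSI diagnostics
    probe: any feature energy outside these n directions is discarded."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n]      # top-n directions by variance
    W = np.diag(vals[idx] ** -0.5) @ vecs[:, idx].T
    return W @ Xc, W

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2000))
Z, W = top_n_whiten(X, 3)
print(Z.shape)  # (3, 2000): only 3 of 5 directions survive
print(np.allclose(Z @ Z.T / Z.shape[1], np.eye(3)))  # whitened: identity covariance
```

The retained output has identity covariance by construction; a reservoir feature only survives this step if it carries enough variance to enter the top-$n$ eigenspace.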


A 1/R Law for Kurtosis Contrast in Balanced Mixtures

Bi, Yuda, Xiao, Wenjun, Bai, Linhao, Calhoun, Vince D

arXiv.org Machine Learning

Kurtosis-based Independent Component Analysis (ICA) weakens in wide, balanced mixtures. We also show that purification (selecting m ≪ R sign-consistent sources) restores R-independent contrast Ω(1/m), with a simple data-driven heuristic. Synthetic experiments validate the predicted decay, the T crossover, and contrast recovery. ICA recovers statistically independent latent sources from linear mixtures and is identifiable whenever at most one source is Gaussian [1]. Excess kurtosis, the standardized fourth cumulant, is a central contrast function [9], and kurtosis-type nonlinearities remain standard in FastICA.
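The 1/R decay the title names can be checked directly: excess kurtosis is additive in cumulants, so a balanced unit-variance mixture of R i.i.d. sources with excess kurtosis κ has excess kurtosis κ/R. A minimal numerical sketch, using Laplace sources (κ = 3) as an arbitrary super-Gaussian example, not the paper's exact setup:

```python
import numpy as np

def excess_kurtosis(x):
    """Standardized fourth cumulant: E[z^4] - 3 for z = (x - mean)/std."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
T = 200_000
kappa = {}
for R in (1, 4, 16):
    # balanced mixture of R i.i.d. Laplace sources (excess kurtosis 3 each)
    S = rng.laplace(size=(R, T))
    y = S.sum(axis=0) / np.sqrt(R)   # unit-variance balanced mixture
    kappa[R] = excess_kurtosis(y)
    print(R, round(kappa[R], 2))     # decays roughly like 3/R
```

As R grows the mixture approaches Gaussian and the kurtosis contrast vanishes, which is exactly why wide balanced mixtures weaken kurtosis-based ICA.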







Deep Deterministic Nonlinear ICA via Total Correlation Minimization with Matrix-Based Entropy Functional

Li, Qiang, Yu, Shujian, Ma, Liang, Ma, Chen, Liu, Jingyu, Adali, Tulay, Calhoun, Vince D.

arXiv.org Machine Learning

Blind source separation, particularly through independent component analysis (ICA), is widely utilized across various signal processing domains for disentangling underlying components from observed mixed signals, owing to its fully data-driven nature that minimizes reliance on prior assumptions. However, conventional ICA methods rely on an assumption of linear mixing, limiting their ability to capture complex nonlinear relationships and to maintain robustness in noisy environments. In this work, we present deep deterministic nonlinear independent component analysis (DDICA), a novel deep neural network-based framework designed to address these limitations. DDICA leverages a matrix-based entropy functional to directly optimize the independence criterion via stochastic gradient descent, bypassing the need for variational approximations or adversarial schemes. This results in a streamlined training process and improved resilience to noise. We validated the effectiveness and generalizability of DDICA across a range of applications, including simulated signal mixtures, hyperspectral image unmixing, modeling of primary visual receptive fields, and resting-state functional magnetic resonance imaging (fMRI) data analysis. Experimental results demonstrate that DDICA effectively separates independent components with high accuracy across these settings. These findings suggest that DDICA offers a robust and versatile solution for blind source separation in diverse signal processing tasks.
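The matrix-based entropy functional the abstract mentions (in the style of Sánchez Giraldo et al.) computes a Rényi α-entropy from the eigenvalues of a trace-normalized Gram matrix, with no density estimation. A hedged sketch of that quantity alone, with an illustrative Gaussian kernel and α = 2; the DDICA training loop built on top of it is not reproduced here:

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy of samples X (n x d):
    S_alpha(A) = log2(sum_i lambda_i(A)^alpha) / (1 - alpha),
    where A is the Gaussian Gram matrix normalized to unit trace."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-sq / (2 * sigma ** 2))
    A = K / np.trace(K)                                  # unit-trace Gram matrix
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]                               # drop numerical zeros
    return np.log2((lam ** alpha).sum()) / (1 - alpha)

rng = np.random.default_rng(2)
H = matrix_renyi_entropy(rng.standard_normal((64, 2)))
print(H > 0)  # True: a nondegenerate sample has positive entropy
```

Because the eigenvalues are nonnegative and sum to one, the functional is differentiable in the samples, which is what lets a method like DDICA minimize an independence criterion by plain stochastic gradient descent.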


Enhancing diffusion models with Gaussianization preprocessing

Li, Cunzhi, Kang, Louis, Shimazaki, Hideaki

arXiv.org Machine Learning

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020) have emerged as one of the most powerful classes of generative models for high-dimensional data, achieving state-of-the-art performance in image synthesis (Dhariwal and Nichol, 2021; Rombach et al., 2022) and other tasks such as action generation in robotics or protein design (Watson et al., 2023; Chi et al., 2025). However, sampling from these models is typically slow: many reverse-time steps are required to transform an initial Gaussian sample into a high-quality sample in data space (Ho et al., 2020; Song et al., 2020). This computational cost restricts the practical deployment of diffusion models in real-time or resource-constrained settings (Salimans and Ho, 2022; Lu et al., 2022). Recent theoretical and empirical studies suggest that this inefficiency is closely related to a dynamical phase transition (bifurcation) that occurs during the reverse process (Raya and Ambrogioni, 2024; Biroli et al., 2024; Ambrogioni, 2025). In the early reverse steps, the trajectories stay near a stable fixed point whose distribution is close to the initial independent Gaussian, and little structure is present in the samples.
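A classic marginal Gaussianization step, the family of preprocessing the title refers to, maps each variable to standard-normal marginals through its empirical ranks and the inverse normal CDF. The sketch below is a generic rank-based version under that assumption, not the paper's specific pipeline:

```python
import numpy as np
from statistics import NormalDist

def marginal_gaussianize(x):
    """Map a 1-D sample to approximately N(0, 1) marginals:
    rank each point, convert ranks to probabilities in (0, 1),
    then apply the inverse standard-normal CDF."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))        # 0 .. n-1
    u = (ranks + 0.5) / n                    # keeps probabilities off 0 and 1
    return np.array([NormalDist().inv_cdf(p) for p in u])

rng = np.random.default_rng(3)
x = rng.exponential(size=1000)               # heavily skewed input
z = marginal_gaussianize(x)
print(abs(z.mean()) < 0.05, 0.9 < z.std() < 1.1)  # True True
```

The transform is monotone per coordinate, so it is invertible via the empirical quantiles; a diffusion model trained on the Gaussianized data would start its reverse process from a distribution already much closer to its Gaussian prior.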


A Normative and Biologically Plausible Algorithm for Independent Component Analysis

Neural Information Processing Systems

The brain effortlessly solves blind source separation (BSS) problems, but the algorithm it uses remains elusive. In signal processing, linear BSS problems are often solved by Independent Component Analysis (ICA). To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting where data samples are streamed one at a time, and the NN computes the sources on the fly without storing any significant fraction of the data in memory.
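Requirement 1 (one sample at a time, nothing stored) can be illustrated with a generic one-unit online ICA update on whitened data. This is a textbook-style stochastic-gradient sketch, not the paper's normative, biologically plausible circuit; the whitening here is done offline purely to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two super-Gaussian (Laplace) sources, a random linear mixture, PCA whitening.
S = rng.laplace(size=(2, 20000)) / np.sqrt(2)   # unit-variance sources
A = rng.standard_normal((2, 2))
X = A @ S
X -= X.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(X @ X.T / X.shape[1])
Z = np.diag(vals ** -0.5) @ vecs.T @ X          # whitened sample stream

# Online one-unit rule: each whitened sample nudges w down the stochastic
# gradient of E[log cosh(w^T z)] (a descent direction for super-Gaussian
# sources), then w is renormalized to the unit sphere. Each sample is used
# once and discarded, as requirement 1 demands.
w = np.array([1.0, 0.0])
for t in range(Z.shape[1]):
    z = Z[:, t]
    eta = 0.05 / (1 + t / 2000)                 # decaying step size
    w -= eta * z * np.tanh(w @ z)
    w /= np.linalg.norm(w)

y = w @ Z
corr = max(abs(np.corrcoef(y, S[i])[0, 1]) for i in range(2))
print(round(corr, 2))  # close to 1 when one source direction is recovered
```

The update touches only the current sample and the weight vector, so memory is constant in the stream length; making such a rule map onto plausible neural dynamics is the harder problem the paper addresses.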